Courses

In SIGGRAPH 2013 Courses, attendees learn from the experts in the field and gain inside knowledge that is critical to career advancement. Courses are short (1.5 hours) or half-day (3.25 hours) structured sessions that often include elements of interactive demonstration, performance, or other imaginative approaches to teaching.

The spectrum of Courses ranges from an introduction to the foundations of computer graphics and interactive techniques for those new to the field to advanced instruction on the most current techniques and topics. Courses include core curricula taught by invited instructors as well as Courses selected from juried proposals.


Introduction to Computer Graphics
Andrew Glassner (The Imaginary Institute)

The best way to get all you can out of your SIGGRAPH week is to start off with a solid understanding of the basics. This course covers the basics of 3D computer graphics in a friendly and visual way, without math or programming. The course consists mostly of live demonstrations, because computer graphics is a great way to teach new ideas! Topics include the basic principles and language of the field, so that you'll understand what's going on around you and enjoy meaningful conversations during SIGGRAPH 2013.

Mobile Game Creation for Everyone
Joel Van Eenwyk (Havok)
Walter Luh (Corona Labs)

An Introduction to OpenGL Programming
Edward Angel (University of New Mexico)
Dave Shreiner (ARM, Inc.)

Notes: shreiner-opengl.pdf

OpenGL is the most widely available library for creating interactive, computer graphics applications across all of the major computer operating systems. Its uses range from creating applications for scientific visualizations to computer-aided design, interactive gaming, and entertainment, and with each new version its capabilities reveal the most up-to-date features of modern graphics hardware.

This course provides an accelerated introduction to programming OpenGL, emphasizing the most modern methods for using the library. In recent years, OpenGL has undergone numerous updates, which have fundamentally changed how programmers interact with the application programming interface (API) and the skills required for being an effective OpenGL programmer. The most notable of these changes, the introduction of shader-based rendering, has expanded to subsume almost all functionality in OpenGL.

While there have been numerous courses on OpenGL in the past, the recent revisions to the API have provided a wealth of new functionality and features to create ever-richer content. This course builds from demonstrating the use of the most fundamental shader-based OpenGL pipeline to introducing numerous techniques that can be implemented using OpenGL.
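As an illustrative sketch (not taken from the course notes), the fundamental shader-based pipeline the course builds from can be as small as a pass-through vertex shader and a solid-color fragment shader:

```glsl
#version 430 core
// --- vertex shader: transform each vertex by one model-view-projection matrix
layout (location = 0) in vec4 vPosition;
uniform mat4 uMVP;
void main() { gl_Position = uMVP * vPosition; }

// --- fragment shader (separate compilation unit): write a constant color
#version 430 core
uniform vec4 uColor;
out vec4 fColor;
void main() { fColor = uColor; }
```

Everything else in a modern OpenGL program — buffer objects, vertex-array state, uniform updates — exists to feed stages like these; the fixed-function paths this style replaced are no longer part of the core profile.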

Recent Advances in Light-Transport Simulation: Theory & Practice
Jaroslav Krivanek (Charles University in Prague)
Iliyan Georgiev (Universität des Saarlandes)
Anton S. Kaplanyan (Karlsruher Institut für Technologie)
Juan Cañada (Next Limit Technologies)

Notes: krivanek-notes.pdf

Robust and efficient light-transport simulation based on statistical methods is the subject of renewed research interest, propelled by the desire to accurately render general environments with complex materials and light sources, which is often difficult with current solutions. In addition, it has been recognized that advanced methods, which can render many effects in one pass without excessive tweaking, increase artists’ productivity and allow them to focus on their creative work. For this reason, the movie industry is shifting away from approximate rendering solutions toward physically based rendering methods, which poses new challenges in terms of strict requirements on high image quality and algorithm robustness.

Many of the recent advances in light-transport simulation, such as new Markov chain Monte Carlo methods or robust combination of bidirectional path tracing with photon mapping, are made possible by interpreting light transport as an integral in the space of light paths. However, there is a great deal of confusion among practitioners and researchers alike regarding these path-space methods.

The goal of this course is twofold. First, it presents a coherent review of the path-integral formulation of light transport and its applications, including the most recent ones, and it shows that rendering algorithms that may seem complex at first sight are, in fact, naturally derived from this general framework. A significant part of the course is devoted to application of Markov chain Monte Carlo methods for light-transport simulation, such as Metropolis Light Transport and its variants. The second part of the course discusses practical aspects of applying advanced light-transport simulation methods in the movie industry and other application domains, such as architectural and product visualization.
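The core idea behind Metropolis-style methods can be shown on a one-dimensional toy integral (the real integrand lives in path space, so this is purely illustrative): an ordinary Monte Carlo estimator computes the integral's value, while a Metropolis chain mutates samples so they are distributed proportionally to the integrand, exactly as Metropolis Light Transport mutates light paths.

```python
import random

random.seed(1)

def f(x):
    # Stand-in "path contribution"; in rendering this is the measurement
    # contribution function over the space of light paths.
    return x * x

N = 200_000

# Plain Monte Carlo estimate of I = integral of f over [0, 1] (exact: 1/3).
mc = sum(f(random.random()) for _ in range(N)) / N

# Metropolis chain: sample x with density proportional to f via small,
# symmetric mutations; proposals outside the domain are rejected (f = 0).
x, total = 0.5, 0.0
for _ in range(N):
    y = x + random.uniform(-0.1, 0.1)                  # tentative mutation
    if 0.0 < y <= 1.0 and random.random() < min(1.0, f(y) / f(x)):
        x = y                                          # accept, else keep x
    total += x
mean_x = total / N   # E[x] under density f/I; exact value is 3/4
```

The chain never evaluates the normalization of f, which is why MLT pairs it with a plain Monte Carlo estimate of overall image brightness.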

The Digital Production Pipeline
Darin Grant (Method Studios)
Kim Libreri (LucasFilm)
Steve Lavietes (Sony Pictures Imageworks)
Jonathan Gibbs (PDI/Dreamworks)
Barbara Ford Grant (How to Make Good Pictures, LLC)

It used to be about getting analogue data in and out of the digital world. Today it’s about connecting those various digital worlds into a fluid creative process.

While studios look to tighten budgets, minimize risk, and broaden films’ reach worldwide across platforms, there is no standard operating procedure for studios to produce content. Shorter schedules and globalization are as disruptive to the digital pipeline today as non-linear editing and digital cameras were to their analogue counterparts.

We'll walk through digital pipelines, expose a few of the methods and workflows that have gone unchanged for far too long, and take a look at trends in production workflows that will allow studios to adapt quickly to these ever-changing environments.

Turbulent Fluids
Nils Thuerey (ScanlineVFX GmbH)
Theodore Kim (University of California, Santa Barbara)
Tobias Pfaff (University of California, Berkeley)

Over the last decade, the visual effects industry has embraced physics simulations as a highly useful tool for creating realistic scenes ranging from a small camp fire to large-scale destruction of whole cities. While fluid simulations are now widely used in the industry, it is still inherently difficult to control large-scale simulations, and there is a constant struggle to increase visual detail.

This course approaches these problems using turbulence methods. Turbulent detail is what makes typical fluid simulations look impressive, and the underlying physics motivate a powerful approach for control; they allow for an elegant split of large-scale motion and small-scale turbulent detail. The result is a two-stage workflow that is highly convenient for artists. First, a rough, fast initial simulation is performed, then turbulent effects are added to enhance detail.
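A key property of the added small-scale detail is that it should not introduce sources or sinks into the flow. A common trick (used by curl-noise and related methods; the sine potential below is a toy stand-in, not the course's wavelet construction) is to take the velocity as the curl of a scalar potential, which is divergence-free by construction:

```python
import math

# Analytic partial derivatives of a toy potential psi(x, y) = sin(3x) sin(2y).
def dpsi_dx(x, y):
    return 3.0 * math.cos(3.0 * x) * math.sin(2.0 * y)

def dpsi_dy(x, y):
    return 2.0 * math.sin(3.0 * x) * math.cos(2.0 * y)

def velocity(x, y):
    # 2D curl of psi: u = d(psi)/dy, v = -d(psi)/dx  =>  div(u, v) = 0
    return dpsi_dy(x, y), -dpsi_dx(x, y)

# Verify the divergence numerically at an arbitrary point.
h, px, py = 1e-4, 0.7, 0.3
div = ((velocity(px + h, py)[0] - velocity(px - h, py)[0]) / (2 * h)
       + (velocity(px, py + h)[1] - velocity(px, py - h)[1]) / (2 * h))
```

Because the detail field is divergence-free, it can be layered onto the coarse simulation without fighting the pressure solve.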

After reviewing the basics of fluid solvers and the popular wavelet turbulence approach, the course presents several powerful methods for capturing advanced effects such as boundary layers and turbulence with directional preferences. It also explains the difficulties of liquid simulations and presents an approach to liquid turbulence that is based on wave dynamics.

Full source code for all of the methods covered in the course is available to attendees. Instructors outline convenient starting points for navigating the code.

Digital Geometry Processing With Discrete Exterior Calculus
Fernando de Goes (California Institute of Technology)
Keenan Crane (California Institute of Technology)
Mathieu Desbrun (California Institute of Technology)
Peter Schröder (California Institute of Technology)

Notes: crane-dgp.pdf
Auxiliary Material: crane-dgp.zip

An introduction to geometry processing using discrete exterior calculus (DEC), which provides a simple, flexible, and efficient framework for building a unified geometry-processing platform. The course provides essential mathematical background as well as a large array of real-world examples. It also provides a short survey of the most relevant recent developments in digital geometry processing and discrete differential geometry. Compared to previous SIGGRAPH courses, this course focuses heavily on practical aspects of DEC, with an emphasis on implementation and applications.

The course begins with the core ideas from exterior calculus, in both the smooth and discrete setting. Then it shows how a large number of fundamental geometry-processing tools (smoothing, parameterization, geodesics, mesh optimization, etc.) can be implemented quickly, robustly, and efficiently within this single common framework. It concludes with a discussion of recent extensions of DEC that improve efficiency, accuracy, and versatility.
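As a tiny illustration of the discrete setting (a sketch in the spirit of the notes, not code from them), the exterior derivative on a mesh is just a signed incidence matrix, and the identity d∘d = 0 holds exactly, by construction, on a single oriented triangle:

```python
# Discrete exterior derivatives on one oriented triangle with vertices 0,1,2.
# d0 maps 0-forms (vertex values) to 1-forms (edge values) -- a discrete
# gradient; d1 maps 1-forms to 2-forms (face values) -- a discrete curl.
edges = [(0, 1), (1, 2), (0, 2)]
d0 = [[0] * 3 for _ in edges]
for i, (a, b) in enumerate(edges):
    d0[i][a], d0[i][b] = -1, 1

# The face boundary 0 -> 1 -> 2 -> 0 traverses edges (0,1) and (1,2) with
# their orientation and edge (0,2) against it, hence the -1 entry.
d1 = [[1, 1, -1]]

# Fundamental identity d(d(anything)) = 0: "the curl of a gradient vanishes".
d1d0 = [[sum(d1[f][e] * d0[e][v] for e in range(3)) for v in range(3)]
        for f in range(len(d1))]
```

Operators like the cotangent Laplacian then fall out as compositions of these sparse matrices with diagonal Hodge stars, which is what makes the framework so compact to implement.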

The course notes grew out of the discrete differential geometry course taught over the past five years at the California Institute of Technology, for undergraduates and beginning graduate students in computer science, applied mathematics, and associated fields. The notes also provide guided exercises (both written and coding) that attendees can later use to deepen their understanding of the material.

Numerical Methods for Linear Complementarity Problems in Physics-Based Animation
Kenny Erleben (Københavns Universitet)

Notes: erleben-notes.pdf

This course provides an introduction to the definition of linear complementarity problems (LCPs) and outlines the derivation of a toolbox of numerical methods. It also presents a small convergence study on the methods to illustrate their numerical properties. The course is a good introduction to implementing numerical methods, because it includes tips and tricks for implementation based on considerable practical experience.
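One workhorse from the toolbox of LCP methods is projected Gauss-Seidel; the sketch below is a generic textbook version for illustration (it assumes a positive diagonal, as holds for the symmetric positive-definite matrices common in contact problems), not code from the course notes:

```python
# LCP: find z >= 0 such that w = A z + b >= 0 and z . w = 0.
def pgs_lcp(A, b, iters=200):
    n = len(b)
    z = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            # Residual excluding z[i]'s own contribution, then a
            # Gauss-Seidel update projected onto the constraint z[i] >= 0.
            r = b[i] + sum(A[i][j] * z[j] for j in range(n) if j != i)
            z[i] = max(0.0, -r / A[i][i])
    return z

# Small SPD example; the unconstrained solution (1/3, 1/3) is feasible,
# so it is also the LCP solution.
z = pgs_lcp([[2.0, 1.0], [1.0, 2.0]], [-1.0, -1.0])
```

PGS converges only linearly, which is exactly the kind of numerical property the course's convergence study makes visible.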

Ray Tracing is the Future and Ever Will Be
Alexander Keller (NVIDIA Research)
Tero Karras (NVIDIA Research)
Ingo Wald (Intel Corporation)
Timo Aila (NVIDIA Research)
Samuli Laine (NVIDIA Research)
Jacco Bikker (NHTV University of Applied Sciences Breda)
Christiaan Gribble (SURVICE Engineering Company)
Won-Jong Lee (Samsung Advanced Institute of Technology)
James McCombe (Imagination Technologies Limited)

Notes: keller-notes.pdf

The primary objective of this course is to present a coherent summary of the state of the art in ray tracing technology. The course covers the most recent developments and practical aspects of the parallel construction and traversal of acceleration data structures on highly parallel processors, including a discussion of divergent code paths, memory accesses, and occupancy. Ray tracing in real-time games is considered one of the main application opportunities, but an important part of the course focuses on hardware for ray tracing applications on mobile platforms.
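At the heart of traversing any bounding-volume hierarchy sits the ray/box "slab" test; the Python sketch below shows the logic (production kernels vectorize this and handle zero direction components via precomputed reciprocals, which this simplified version assumes away):

```python
# Slab test: intersect the ray against each axis-aligned slab of the box and
# keep the latest entry and earliest exit; the ray hits iff they overlap.
def ray_hits_aabb(orig, direction, lo, hi):
    tmin, tmax = 0.0, float("inf")
    for o, d, l, h in zip(orig, direction, lo, hi):
        inv = 1.0 / d                     # assumes no zero components
        t1, t2 = (l - o) * inv, (h - o) * inv
        tmin = max(tmin, min(t1, t2))     # latest entry across the slabs
        tmax = min(tmax, max(t1, t2))     # earliest exit across the slabs
    return tmin <= tmax

hit = ray_hits_aabb((0, 0, -5), (0.1, 0.1, 1.0), (-1, -1, -1), (1, 1, 1))
miss = ray_hits_aabb((0, 0, -5), (0.1, 0.1, 1.0), (5, 5, 5), (6, 6, 6))
```

On wide SIMD or GPU hardware, rays in a warp that take different branches through the hierarchy are exactly the divergent code paths the course discusses.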

Story: It's Not Just for Writers ... Anymore
Craig Caldwell (University of Utah)

Notes: caldwell-notes.pdf

When studios say "it is about the story!" everyone nods in agreement. But story creation remains a mystery for many in computer animation, VFX, and games because they have not focused on screenwriting. This course covers the universal elements of story: plot, characters, and distinctive narrative structure. It analyzes conflict (internal, external, environmental), turning points, cause and effect, archetypes vs. stereotypes, inciting incidents, and how choice defines character. It also reviews the questions raised in all stories:

  • What is at stake (survival, safety, love, esteem, etc.)?
  • What is the motivation (inciting incident) of the main character (protagonist)?
  • Will that be enough to move the main character from ordinary, comfortable life to a different world (where the action takes place)?
  • What "changes" are necessary to make the story dramatic?

Advances in New Interfaces for Musical Expression
Sidney Fels (The University of British Columbia)
Michael Lyons (Ritsumeikan University)

Notes: fels-notes.pdf

Advances in digital audio technologies have led to a situation where computers now play a role in most music production and performance. Digital technologies offer unprecedented opportunities for creation and manipulation of sound, but the flexibility of these new technologies implies an often confusing array of choices for composers and performers. Some artists have responded by using computers directly to create music, which has generated an explosion of new musical forms. However, most would agree that the computer is not a musical instrument, in the same sense as traditional instruments, and it is natural to ask "how to play the computer" using interface technology appropriate for human brains and bodies.

In 2001, we organized the first workshop on New Interfaces for Musical Expression (NIME) to attempt to answer this question by exploring connections with the better-established field of human-computer interaction. This course summarizes what has been learned at the annual NIME conferences since that first workshop. It begins with an overview of the theory and practice of new musical-interface design and explores what makes a good musical interface and whether there are any useful design principles or guidelines available. Topics include mapping from human action to musical output, control intimacy, and tools for creating musical interfaces (sensors and microcontrollers, audio synthesis techniques, and communication protocols such as Open Sound Control and MIDI). The remainder of the course consists of several case studies that represent the major broad themes of the NIME conference, including augmented and sensor-based instruments, mobile and networked music, and NIME pedagogy.
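The mapping layer the course describes — from raw sensor readings to musical output — can be tiny. The following sketch (illustrative choices of range and note span, not a NIME prescription) quantizes a 10-bit sensor value onto a MIDI note range and converts MIDI notes to frequency in equal temperament:

```python
def sensor_to_note(value, lo=48, hi=72):
    # Map a 10-bit sensor reading (0..1023) onto a two-octave MIDI range.
    return lo + (value * (hi - lo)) // 1024

def midi_to_hz(note):
    # Equal temperament, with A4 (MIDI note 69) tuned to 440 Hz.
    return 440.0 * 2.0 ** ((note - 69) / 12.0)
```

Design decisions at exactly this level — continuous vs. quantized pitch, the width of the mapped range, response latency — are what the NIME literature means by "control intimacy".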

Advances in Real-Time Rendering in Games Part I
Natasha Tatarchuk (Bungie Studios)

Advances in real-time graphics research and the ever-increasing power of mainstream GPUs and consoles continue to generate an explosion of innovative algorithms suitable for fast, interactive rendering of complex and engaging virtual worlds. Every year, the latest video games employ a vast variety of sophisticated algorithms to produce ground-breaking 3D rendering that pushes the visual boundaries and interactive experience of rich environments.

Lights! Speed! Action! Fundamentals of Physical Computing for Programmers
Erik Brunvand (University of Utah)

Notes: brunvand-notes.pdf
Auxiliary Material: brunvand-datasheets.zip brunvand-programs.zip

The definition of "computer graphics" as used by artists in new media and kinetic areas of the arts is much more expansive than simply rendering to a screen. A visit to the SIGGRAPH 2013 Art Gallery, for example, reveals a wide variety of uses of physical computing, embedded control, sensors, and actuators in the service of art. This course is for programmers, educators, artists, and others who would like to learn the basic skills necessary to include physical components in their computing systems.

The course is targeted at programmers with little or no electronics background. It begins with basic electronics concepts as they are used with physical computing components, then reviews a variety of sensors that provide information about the physical environment (light, motion, distance from objects, flex, temperature, etc.), programmer-controlled lights (LEDs), and programmer-controlled motion (servos, motors). The use of these components is described in the context of the Arduino microcontroller, but the topics are general and will transfer to a variety of other computing platforms. Although the course includes a few simple formulas, the strong focus will be on practical usage and common-sense applications in real circuits.
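A minimal Arduino-style sketch conveys the flavor of the sensor-to-actuator loop (the wiring here is hypothetical — a photoresistor divider on analog pin A0 and an LED with a current-limiting resistor on PWM pin 9 — and is not taken from the course notes):

```c
const int SENSOR_PIN = A0;   // photoresistor voltage divider (hypothetical)
const int LED_PIN = 9;       // LED on a PWM-capable pin (hypothetical)

void setup() {
  pinMode(LED_PIN, OUTPUT);
}

void loop() {
  int light = analogRead(SENSOR_PIN);                 // 0..1023 from 10-bit ADC
  analogWrite(LED_PIN, map(light, 0, 1023, 0, 255));  // rescale to 8-bit PWM
  delay(10);                                          // ~100 Hz update rate
}
```

Everything else in the course — servos, motors, distance sensors — follows this same read-transform-write pattern with different components on the pins.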

Combining GPU Data-Parallel Computing With OpenGL
Mike Bailey (Oregon State University)

Notes: bailey-notes.pdf

Data-parallel computing is a programming paradigm in which the same analysis code is applied to different data elements. Many applications in visual computing fall into this category, such as particle systems, image processing, chain models, cloth models, flow analysis, and structural modeling. And, because of the nature of graphics architectures, GPUs are the natural place to perform such operations quickly.

If you are an OpenGL programmer, you have two options:

  • OpenCL, an industry-wide standard created by the Khronos Group, the same body that governs OpenGL, has been available for several years.
  • OpenGL compute shaders became available in the summer of 2012 in the OpenGL 4.3 release.

The presence of these two multi-vendor solutions allows application developers considerable flexibility to examine their needs and choose the solution that most closely matches. These two solutions are especially important because each can use OpenGL buffers for their data storage. This means that the data never leave the GPU. They are local for both the computing and the rendering, increasing the speed of the application.

This course examines both solutions, shows how each can be used to solve data-parallel computing problems, and explains how each interfaces with OpenGL's rendering.
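To make the compute-shader option concrete, here is a hedged sketch (not taken from the course notes) of a GLSL 4.3 compute shader advancing a particle system whose buffers are ordinary OpenGL buffer objects, so the renderer can bind the same positions as a vertex buffer without any copy off the GPU:

```glsl
#version 430
// One invocation per particle; 128 invocations per work group.
layout (local_size_x = 128) in;
layout (std430, binding = 0) buffer Pos { vec4 pos[]; };
layout (std430, binding = 1) buffer Vel { vec4 vel[]; };
uniform float dt;

void main() {
    uint i = gl_GlobalInvocationID.x;
    pos[i] += vel[i] * dt;   // simple forward-Euler integration step
}
```

The host side dispatches with glDispatchCompute and issues a glMemoryBarrier before drawing, so the vertex stage sees the updated positions.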

Advances in Real-Time Rendering in Games Part II
Natasha Tatarchuk (Bungie Studios)

Advances in real-time graphics research and the ever-increasing power of mainstream GPUs and consoles continue to generate an explosion of innovative algorithms suitable for fast, interactive rendering of complex and engaging virtual worlds. Every year, the latest video games employ a vast variety of sophisticated algorithms to produce ground-breaking 3D rendering that pushes the visual boundaries and interactive experience of rich environments.

OpenSubdiv From Research to Industry Adoption
Charles Loop (Microsoft Research)
Dirk Van Gelder (Pixar Animation Studios)
Nathan Litke (DigitalFish, Inc.)
Rachid El Guerrab (Motorola Mobility LLC)
Baback Elmieh (Motorola Mobility LLC)
Manuel Kraemer (Pixar Animation Studios)

Catmull-Clark subdivision surfaces were invented in the 1970s. The specification was extended with local edits, creases, and other features, and formalized into a usable technique for animation in 1998. But the extended technology has not enjoyed widespread adoption in animation for a variety of reasons. Recently, Pixar decided to release its subdivision patents and working codebase, in the hope that giving away its high-performance GPU-accelerated code will create a standard for geometry throughout the animation industry.

This course answers several questions:

  • What is a subdivision surface?
  • What are the extended features, and how exactly do they work?
  • What is the Feature Adaptive algorithm, and how does it make the surfaces useful on GPUs?
  • What is OpenSubdiv, and how does it implement these algorithms?
  • How do you integrate OpenSubdiv into an application and a pipeline?
  • What are the challenges and solutions associated with putting OpenSubdiv on a mobile device?
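For readers new to the first question, the three Catmull-Clark point rules are short enough to sketch directly (shown here for 2D points and interior, crease-free topology; a full subdivision step applies them across the whole mesh connectivity):

```python
def average(points):
    n = len(points)
    return tuple(sum(c) / n for c in zip(*points))

def face_point(face_vertices):
    # New face point: centroid of the face's vertices.
    return average(face_vertices)

def edge_point(v0, v1, fp0, fp1):
    # New edge point: average of the edge's endpoints and the two
    # adjacent face points (interior edges only).
    return average([v0, v1, fp0, fp1])

def vertex_point(v, Q, R, n):
    # Repositioned original vertex of valence n: (Q + 2R + (n-3)v) / n,
    # where Q averages the adjacent face points and R the midpoints of
    # the incident edges.
    return tuple((q + 2.0 * r + (n - 3) * s) / n for q, r, s in zip(Q, R, v))
```

The extended features (creases, local edits) and the Feature Adaptive GPU algorithm covered in the course all build on these same weights.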

Multithreading and VFX
James Reinders (Intel Corporation)
George ElKoura (Pixar Animation Studios)
Erwin Coumans (Advanced Micro Devices, Inc.)
Ron Henderson (DreamWorks Animation)
Martin Watt (DreamWorks Animation)
Jeff Lait (Side Effects Software Inc.)

Notes: reinders-notes.pdf

Parallelism is important to many aspects of visual effects. In this course, experts in several key areas present their specific experiences in applying parallelism to their domain of expertise. The problem domains are very diverse, and so are the solutions employed, including specific threading methodologies. This gives the audience a wide understanding of various approaches to multithreading, a broad context of state-of-the-art techniques for implementing parallelism, and a basis for deciding which technologies and approaches to adopt for their own future projects. The presenters describe both successes and pitfalls, the challenges and difficulties they encountered, and the approaches they adopted to resolve these issues.

The course begins with an overview of the current state of parallel programming, followed by five presentations on various domains and threading approaches. Domains include rigging, animation, dynamics, simulation, and rendering for film and games, as well as a threading implementation for a full-scale commercial application that covers all of these areas. Topics include CPU and GPU programming, threading, vectorization, tools, debugging techniques, and optimization and performance-profiling approaches.
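The simplest shape these domains share is fanning independent work units (frames, tiles, rig evaluations) across a pool; a minimal Python sketch of the pattern (illustrative only — production DCC code uses native threading libraries such as TBB, and CPU-bound Python work would use processes to sidestep the GIL):

```python
from concurrent.futures import ThreadPoolExecutor

def shade_frame(frame):
    # Stand-in for expensive, independent per-frame work.
    return sum(i * i for i in range(1000 * (frame + 1))) % 97

# Fan the frames out across four workers; map() preserves input order.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(shade_frame, range(8)))

serial = [shade_frame(f) for f in range(8)]   # same results, same order
```

The hard parts the presenters cover — shared mutable state, load imbalance, debugging races — are exactly what appears once the work units stop being independent.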

Efficient Real-Time Shadows
Elmar Eisemann (Delft University of Technology)
Ulf Assarsson (Chalmers University of Technology)
Michael Schwarz (Weta Digital)
Michal Valient (Guerrilla Games)
Michael Wimmer (Technische Universität Wien)

Notes: eisemann-notes.pdf

This course provides an overview of efficient, real-time shadow algorithms. It presents the theoretical background but also discusses implementation details for facilitating efficient realizations (hard and soft shadows, volumetric shadows, reconstruction techniques). These elements are of relevance to both experts and practitioners. The course also reviews budget considerations and analyzes performance trade-offs, using examples from various AAA game titles and film previsualization tools. While physical accuracy can sometimes be replaced by plausible shadows, especially for games, film production requires more precision, as well as scalable solutions that can deal with highly detailed geometry.
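The building block underneath most of these algorithms is the shadow-map depth comparison, often softened with percentage-closer filtering (PCF). The sketch below shows the logic in plain Python for clarity (in practice this runs in a fragment shader with hardware comparison samplers):

```python
# Depths are in [0, 1], smaller = closer to the light; the bias term
# suppresses self-shadowing "acne" from depth quantization.
def shadow_factor(depth_map, u, v, receiver_depth, bias=1e-3, radius=1):
    hits = taps = 0
    for dv in range(-radius, radius + 1):
        for du in range(-radius, radius + 1):
            taps += 1
            if receiver_depth - bias <= depth_map[v + dv][u + du]:
                hits += 1          # no blocker closer to the light here
    return hits / taps             # 1.0 fully lit, 0.0 fully shadowed

open_sky = [[1.0] * 5 for _ in range(5)]   # nothing between light and point
blocked = [[0.2] * 5 for _ in range(5)]    # occluder at depth 0.2
lit = shadow_factor(open_sky, 2, 2, 0.5)
dark = shadow_factor(blocked, 2, 2, 0.5)
```

Soft shadows, filtering schemes, and reconstruction techniques all refine how these binary comparisons are distributed and averaged.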

The course builds upon earlier SIGGRAPH courses as well as the recent book Real-Time Shadows (A K Peters, 2011) by four of the instructors (due to its success, a second edition is planned for 2014). And with two instructors who have worked on AAA game and movie titles, the course presents interesting behind-the-scenes information that illuminates key topics.

OpenVDB: An Open-Source Data Structure and Toolkit for High-Resolution Volumes
Ken Museth (DreamWorks Animation)
Jeff Lait (Side Effects Software Inc.)
John Johanson (Digital Domain 3.0, Inc.)
Jeff Budsberg (DreamWorks Animation)
Ron Henderson (DreamWorks Animation)
Mihai Alden (DreamWorks Animation)
Peter Cucka (DreamWorks Animation)
David Hill (DreamWorks Animation)
Andrew Pearce (DreamWorks Animation)

Notes: museth-notes.pdf

This course presents a comprehensive overview of OpenVDB, an open-source C++ library comprising a novel hierarchical data structure and a suite of tools for efficient storage and manipulation of sparse volumetric data discretized on three-dimensional grids. OpenVDB has already been integrated into the next major release of the high-end 3D animation package Houdini, and there is anecdotal evidence that many of the major VFX and production houses are in the process of either evaluating or adopting VDB.
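To give a feel for the core idea only — the real OpenVDB tree is a shallow B-tree-like hierarchy with bit masks and cached accessors, which this toy does not attempt — a sparse volume can be sketched as fixed-size blocks allocated on demand over an unbounded index space, with a background value for everything unallocated:

```python
class SparseGrid:
    def __init__(self, block=8, background=0.0):
        self.block, self.background = block, background
        self.blocks = {}                       # (bi, bj, bk) -> flat block

    def _key(self, i, j, k):
        b = self.block                         # floor division handles
        key = (i // b, j // b, k // b)         # negative indices correctly
        off = (i % b) + b * ((j % b) + b * (k % b))
        return key, off

    def set(self, i, j, k, value):
        key, off = self._key(i, j, k)
        blk = self.blocks.setdefault(key, [self.background] * self.block ** 3)
        blk[off] = value

    def get(self, i, j, k):
        key, off = self._key(i, j, k)
        blk = self.blocks.get(key)
        return self.background if blk is None else blk[off]

g = SparseGrid()
g.set(100, -3, 7, 2.5)      # one voxel far from the origin: one block
```

Memory scales with the occupied blocks rather than the bounding box, which is what makes billion-voxel film-resolution volumes tractable.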

A Practical Guide to Art/Science Collaborations
Daria Tsoupikova (University of Illinois at Chicago)
Helen-Nicole Kostis
Dan Sandin (University of Illinois at Chicago)

Notes: tsoupikova-notes.pdf

This practical guide to art/science (A/S) collaborations advises artists and scientists on how to foster, prepare for, and improve interdisciplinary work. The current state of the art is presented by Dan Sandin, a computer graphics pioneer who has been working across disciplinary boundaries for 35 years. Then the course introduces the instructors' collaborative research study on the important elements of successful A/S collaboration. It concludes with three case studies of collaborative A/S initiatives and critical practical advice.

Dynamic 2D/3D Registration for the Kinect
Sofien Bouaziz (École polytechnique fédérale de Lausanne)
Mark Pauly (École polytechnique fédérale de Lausanne)

Notes: bouaziz-notes.pdf

Image and geometry registration algorithms are essential components of many computer graphics and computer vision systems. With recent technological advances in RGB-D sensors, robust algorithms that combine 2D image and 3D geometry registration have become an active area of research.

This course introduces the basics of 2D/3D registration algorithms and provides theoretical explanations and practical tools for designing computer vision and computer graphics systems based on RGB-D devices such as the Microsoft Kinect or Asus Xtion Live. To illustrate the theory and demonstrate practical relevance, the course briefly discusses three applications: rigid scanning, non-rigid modeling, and real-time face tracking.
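The inner step of a rigid-registration loop, once correspondences are fixed, is a closed-form least-squares fit of a rotation and translation (the 2D Procrustes/Kabsch solution, sketched below for illustration; ICP-style algorithms alternate this step with re-estimating closest-point correspondences, and the 3D case replaces the angle with an SVD):

```python
import math

def rigid_fit_2d(P, Q):
    # Least-squares rotation + translation mapping points P onto their
    # correspondences Q: center both sets, solve for the angle, recover t.
    n = len(P)
    cp = (sum(x for x, _ in P) / n, sum(y for _, y in P) / n)
    cq = (sum(x for x, _ in Q) / n, sum(y for _, y in Q) / n)
    s_cos = s_sin = 0.0
    for (px, py), (qx, qy) in zip(P, Q):
        ax, ay = px - cp[0], py - cp[1]
        bx, by = qx - cq[0], qy - cq[1]
        s_cos += ax * bx + ay * by
        s_sin += ax * by - ay * bx
    theta = math.atan2(s_sin, s_cos)
    c, s = math.cos(theta), math.sin(theta)
    t = (cq[0] - (c * cp[0] - s * cp[1]), cq[1] - (s * cp[0] + c * cp[1]))
    return theta, t

# Recover a known motion from noise-free correspondences.
c, s = math.cos(0.3), math.sin(0.3)
P = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
Q = [(c * x - s * y + 2.0, s * x + c * y - 1.0) for x, y in P]
theta, t = rigid_fit_2d(P, Q)
```

RGB-D registration enriches this loop with image terms and robust weights, but the same alternation between correspondence and alignment remains at its core.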

Physically Based Shading in Theory and Practice
Stephen McAuley (Ubisoft Entertainment)
Stephen Hill (Ubisoft Entertainment)
Adam Martinez (Sony Pictures Imageworks)
Ryusuke Villemin (Pixar Animation Studios)
Matt Pettineo (Ready at Dawn Studios, LLC)
Dimitar Lazarov (Treyarch)
David Neubelt (Ready at Dawn Studios, LLC)
Brian Karis (Epic Games, Inc.)
Christophe Hery (Pixar Animation Studios)
Naty Hoffman (2K)
Håkan Zap Andersson (Autodesk Inc.)

Notes: mcauley-notes.pdf

Physically based shading is increasingly important in both film and game production. By adhering to physically based, energy-conserving shading models, one can easily create high-quality, realistic materials that maintain their quality in a variety of lighting environments. Traditional “ad-hoc” models have required extensive tweaking to achieve the same result, so it is no surprise that physically based models have increased in popularity, particularly because they are often no more difficult to implement or evaluate. Since last year's course (Practical Physically Based Shading in Film and Game Production, SIGGRAPH 2012), many advances have been made in this field, and once again game and film studios present their latest research and techniques.

The course begins with a brief introduction to the physics and mathematics of shading before speakers share examples of how physically based shading models have been used in production. The course introduces new research, explains its practical application in production, and discusses the advantages and disadvantages based on real-world examples.
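Two standard ingredients of such models illustrate what "physically based" buys you (a generic sketch, not code from any presenter): Schlick's approximation to Fresnel reflectance, and the GGX (Trowbridge-Reitz) normal distribution, whose normalization — the projected microfacet area integrates to one — is what makes energy behave predictably under any lighting:

```python
import math

def fresnel_schlick(cos_theta, f0):
    # Reflectance rises from f0 at normal incidence toward 1 at grazing angles.
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

def ggx_ndf(cos_h, alpha):
    # GGX microfacet normal distribution with roughness parameter alpha.
    a2 = alpha * alpha
    d = cos_h * cos_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * d * d)

# Numerically verify the NDF normalization over the hemisphere:
# integral of D(h) * cos(theta) d(omega) = 1 (midpoint rule in theta,
# analytic factor 2*pi in phi).
steps, alpha, total = 20000, 0.5, 0.0
dtheta = (math.pi / 2) / steps
for i in range(steps):
    th = (i + 0.5) * dtheta
    total += ggx_ndf(math.cos(th), alpha) * math.cos(th) * math.sin(th) * dtheta
total *= 2.0 * math.pi
```

An ad-hoc lobe without this property has to be re-tweaked per lighting setup, which is exactly the workflow cost the abstract describes.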

Rendering Massive Virtual Worlds
Graham Sellers (Advanced Micro Devices, Inc.)
Juraj Obert (Advanced Micro Devices, Inc.)
Patrick Cozzi (University of Pennsylvania)
Kevin Ring (Analytical Graphics, Inc.)
Emil Persson (Avalanche Studios)
Joel de Vahl (Avalanche Studios)
J.M.P. van Waveren (Id Software, LLC)

Notes: sellers-notes.pdf

This course is presented in four sections. The first two presentations show how huge data sets can be streamed and displayed in real time for virtual-globe rendering inside a web browser. Topics include pre-processing, storage, and transmission of real-world data, plus cache hierarchies and efficient culling algorithms.

The third section reviews content generation using a combination of procedural and artist-driven techniques. It explores integration of content-generation applications into production tool chains and their use in creation of real-world video games. Topics include productivity, data dependencies, and the trade-offs of putting massive procedural content generation into production.

The fourth section covers recent advances in graphics hardware architecture that allow GPUs to virtualize graphics resources (specifically, textures) by leveraging virtual memory. It discusses augmentation of traditional graphics APIs and presents several use cases and examples.
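The central data structure in virtual texturing is the page table that maps a requested texture page to resident physical memory, falling back to a coarser mip level when the page is not yet streamed in. A software analogue of that lookup (illustrative only; hardware-assisted versions resolve this through GPU virtual memory and page faults):

```python
# page_table maps (mip, page_x, page_y) -> physical page id for resident
# pages. On a miss, retry one mip level coarser with halved page coords.
def lookup(page_table, mip, x, y, max_mip):
    while mip <= max_mip:
        phys = page_table.get((mip, x, y))
        if phys is not None:
            return mip, phys
        mip, x, y = mip + 1, x // 2, y // 2
    return None                       # nothing resident, not even the root

resident = {(2, 1, 0): 7, (0, 5, 3): 1}
exact = lookup(resident, 0, 5, 3, 2)      # finest page is resident
coarse = lookup(resident, 0, 4, 2, 2)     # falls back two levels
```

The engine-integration challenges in the final presentation largely revolve around hiding the latency of turning these fallback hits into streaming requests.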

The final presentation shows how support for hardware-assisted virtual texturing was integrated into a game engine. It reviews the challenges associated with ensuring that the engine continued to operate efficiently on hardware that does not support virtual texturing. It also illustrates the concessions made in the engine for limitations of existing hardware and proposes some future enhancements that would improve the usability of the solution.